Conversation

@vcshih commented Jul 21, 2025

No description provided.

rm-openai and others added 30 commits July 22, 2025 13:12
Action isn't published yet, so gotta do this
1. **Grammar fix**: Remove duplicate "can" in the sentence about
configuring trace names
2. **Correct default value**: Update "Agent trace" to "Agent workflow" to match the actual default value in the codebase (see the sketch below)
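
A minimal sketch of where that default appears, assuming the trace name is configured through `RunConfig`'s `workflow_name` parameter; when it is omitted, the default is "Agent workflow":

```python
import asyncio

from agents import Agent, RunConfig, Runner


async def main() -> None:
    agent = Agent(name="Assistant", instructions="Be concise.")
    # Override the default trace name ("Agent workflow") for this run.
    result = await Runner.run(
        agent,
        "Hello",
        run_config=RunConfig(workflow_name="Customer support flow"),
    )
    print(result.final_output)


asyncio.run(main())
```
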
Automated update of translated documentation
## Summary
- document `agents.realtime.model` so the RealtimeModel link works
- include the new file in the documentation navigation

## Testing
- `make format`
- `make lint`
- `make mypy`
- `make tests`


------
https://chatgpt.com/codex/tasks/task_i_687fadfee88883219240b56e5abba76a
Automated update of translated documentation

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
## Summary
- Added LiteLLM to the acknowledgements section in README.md
- Recognized LiteLLM as a unified interface for 100+ LLMs, which aligns
with the SDK's provider-agnostic approach

## Test plan
- [x] Verify README renders correctly
- [x] Ensure link to LiteLLM repository is functional
The current script fails with

```
Traceback (most recent call last):
  File "/Users/xmxd289/code/openai-agents-python/examples/reasoning_content/main.py", line 19, in <module>
    from agents.types import ResponseOutputRefusal, ResponseOutputText  # type: ignore
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ModuleNotFoundError: No module named 'agents.types'
```

because it should be

`from openai.types.responses import ResponseOutputRefusal, ResponseOutputText`

rather than

`from agents.types import ResponseOutputRefusal, ResponseOutputText`
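
For reference, a hedged sketch of how the corrected import is typically used when inspecting response output content (everything besides the two imported types is illustrative):

```python
from openai.types.responses import ResponseOutputRefusal, ResponseOutputText


def render_content(content) -> str:
    # Reasoning models may refuse a request; handle both text and refusal parts.
    if isinstance(content, ResponseOutputText):
        return content.text
    if isinstance(content, ResponseOutputRefusal):
        return f"[refusal] {content.refusal}"
    return str(content)
```
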

---------

Co-authored-by: Michelangelo D'Agostino <[email protected]>
Will need this for a followup.

---
[//]: # (BEGIN SAPLING FOOTER)
* #1243
* #1242
* __->__ #1235
…behavior (#1233)

The Trace class was using @abc.abstractmethod decorators without inheriting from abc.ABC, which meant the abstract methods weren't enforced.

This change makes the class properly abstract while maintaining all existing functionality, since no code directly instantiates Trace() - all usage goes through the concrete implementations NoOpTrace and TraceImpl.
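
A minimal sketch of the pattern (method names here are illustrative, not the SDK's exact Trace interface):

```python
import abc


class Trace(abc.ABC):  # inheriting from abc.ABC is what enforces the contract
    @abc.abstractmethod
    def start(self) -> None: ...

    @abc.abstractmethod
    def finish(self) -> None: ...


class NoOpTrace(Trace):
    def start(self) -> None:
        pass

    def finish(self) -> None:
        pass


# Trace() now raises TypeError ("Can't instantiate abstract class ...");
# concrete subclasses like NoOpTrace remain instantiable.
```
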
Adds LangDB AI Gateway to the External tracing processors list so
developers can stream Agent‑SDK traces directly into LangDB’s
dashboards.

## Highlights

- End‑to‑end observability of every agent step, tool invocation, and
guardrail.
- Access to 350+ LLM models through a single OpenAI‑compatible endpoint.

- Quick integration: `pip install "pylangdb[openai]" `
```python
import os

from openai import AsyncOpenAI
from pylangdb.openai import init

from agents import set_default_openai_client

init()

client = AsyncOpenAI(
    api_key=os.environ["LANGDB_API_KEY"],
    base_url=os.environ["LANGDB_API_BASE_URL"],
    default_headers={"x-project-id": os.environ["LANGDB_PROJECT_ID"]},
)
set_default_openai_client(client)
```
- Live demo Thread:
https://app.langdb.ai/sharing/threads/53b87631-de7f-431a-a049-48556f899b4d
<img width="1636" height="903" alt="image"
src="https://github.com/user-attachments/assets/075538fb-c1af-48e8-95fd-ff3d729ba37d"
/>

Fixes #1222

---------

Co-authored-by: mutahirshah11 <[email protected]>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Automated update of translated documentation

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Updated the error message to replace the placeholder `your_type` with
`YourType`, removing the underscore and adopting PascalCase, which
aligns with standard Python type naming conventions.
The Financial Research Agent example is broken because it uses the model `gpt-4.5-preview-2025-02-27`. This model has been [removed and is no longer available](https://platform.openai.com/docs/deprecations#2025-04-14-gpt-4-5-preview).

This change follows the recommendation to replace gpt-4.5-preview with gpt-4.1.
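
A hedged sketch of the kind of change involved (agent name and instructions are illustrative, not the example's exact code):

```python
from agents import Agent

writer_agent = Agent(
    name="FinancialWriterAgent",
    instructions="Write a concise markdown report from the research notes.",
    model="gpt-4.1",  # previously "gpt-4.5-preview-2025-02-27", now removed
)
```
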

---------
Co-authored-by: Kazuhiro Sera <[email protected]>
#1098)

### 1. Description

This PR fixes an issue where reasoning content from models accessed via
LiteLLM was not being correctly parsed into the `ChatCompletionMessage`
format. This was particularly noticeable when using reasoning models.

### 2. Context

I am using the `openai-agents-python` library in my project, and it has
been incredibly helpful. Thank you for building such a great tool!

My setup uses `litellm` to interface with `gemini-2.5-pro`. I noticed
that while the agent could receive a response, the reasoning (thinking)
from the Gemini model was lost during the conversion process from the
LiteLLM response format to the OpenAI `ChatCompletionMessage` object.

I saw that PR #871 made progress on a similar issue, but it seems the
specific response structure from LiteLLM still requires a small
adaptation. This fix adds the necessary logic to ensure that these
responses are handled.

**Relates to:** #871

### 3. Key Changes

- `LitellmConverter.convert_message_to_openai`: add `reasoning_content` (see the sketch below)
- `Converter.items_to_messages`: pass the reasoning item through unchanged
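
A hedged sketch of the conversion step described above (the field handling is an assumption based on this description, not the exact SDK code):

```python
from typing import Any


def copy_reasoning_content(litellm_message: Any, openai_message: dict) -> dict:
    # LiteLLM surfaces provider "thinking" output as `reasoning_content`;
    # carry it over so it is not dropped during conversion.
    reasoning = getattr(litellm_message, "reasoning_content", None)
    if reasoning:
        openai_message["reasoning_content"] = reasoning
    return openai_message
```
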
Automated update of translated documentation
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Automated update of translated documentation

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
So far, we've been assuming that audio is played:
- immediately (i.e. with 0 delay/latency)
- in real time

This causes issues with our interrupt tracking. The model wants to know how much audio the user has actually heard. For example, in a phone call agent this assumption wouldn't hold (because there's a delay of a few hundred ms between the model sending audio and the user hearing it). This PR allows you to pass a playback tracker.
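
A minimal sketch of the idea (class and method names are hypothetical, not the SDK's actual API): the application reports how much audio has really been rendered, so interruption handling can truncate at what the user actually heard.

```python
class PlaybackTracker:
    """Tracks how much of the model's audio the user has actually heard."""

    def __init__(self, sample_rate: int = 24000, bytes_per_sample: int = 2) -> None:
        self.sample_rate = sample_rate
        self.bytes_per_sample = bytes_per_sample
        self.played_ms = 0.0

    def on_audio_played(self, chunk: bytes) -> None:
        # Called by the audio output layer once a chunk has actually reached
        # the speaker (or the far end of the call), not when it was sent.
        samples = len(chunk) / self.bytes_per_sample
        self.played_ms += samples / self.sample_rate * 1000

    def elapsed_ms(self) -> float:
        return self.played_ms
```
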






---
[//]: # (BEGIN SAPLING FOOTER)
* #1252
* #1216
* #1243
* __->__ #1242
damianoneill and others added 30 commits September 25, 2025 14:03
- This PR was started from [PR 1606: Tool Guardrails](#1606)
- It adds input and output guardrails at the tool level which can trigger `ToolInputGuardrailTripwireTriggered` and `ToolOutputGuardrailTripwireTriggered` exceptions (see the sketch after this list)
- It includes updated documentation, a runnable example, and unit tests
- `make check` and unit tests all pass

## Edits since last review:
- Extracted nested tool running logic in `_run_impl.py`
- Added rejecting tool call or tool call output and returning a message
to the model (rather than only raising an exception)
- Added the tool guardrail results to the `RunResult`
- Removed docs
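
A hedged sketch of handling the new tripwire exceptions (the import paths are an assumption based on this description; only the exception names come from the PR):

```python
from agents import Agent, Runner
from agents.exceptions import (
    ToolInputGuardrailTripwireTriggered,
    ToolOutputGuardrailTripwireTriggered,
)


async def run_with_tool_guardrails(agent: Agent, prompt: str) -> str:
    try:
        result = await Runner.run(agent, prompt)
        return str(result.final_output)
    except ToolInputGuardrailTripwireTriggered:
        return "A tool call was blocked by an input guardrail."
    except ToolOutputGuardrailTripwireTriggered:
        return "A tool result was blocked by an output guardrail."
```
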
Co-authored-by: Kazuhiro Sera <[email protected]>
This pull request resolves #1867. This is a blocker for developers trying the ChatKit Python server SDK, so we should release a new version including this bump as early as possible.

I've confirmed that migrating to the 2.x major version does not introduce any incompatibility issues with the examples in this repo.